Analog computing


Energy-based learning algorithms for analog computing: a comparative study

Neural Information Processing Systems

Energy-based learning algorithms have recently gained a surge of interest due to their compatibility with analog (post-digital) hardware. Existing algorithms include contrastive learning (CL), equilibrium propagation (EP) and coupled learning (CpL), all of which contrast two states and differ in the type of perturbation used to obtain the second state from the first. However, these algorithms have never been explicitly compared on an equal footing with the same models and datasets, making it difficult to assess their scalability and to decide which one to select in practice. In this work, we carry out a comparison of seven learning algorithms, namely CL and different variants of EP and CpL distinguished by the signs of the perturbations. Specifically, using these learning algorithms, we train deep convolutional Hopfield networks (DCHNs) on five vision tasks (MNIST, F-MNIST, SVHN, CIFAR-10 and CIFAR-100).
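The two-phase contrast at the heart of these algorithms can be sketched on a toy quadratic energy. Everything below — the single-weight model, the closed-form equilibria, and the function names — is an illustrative assumption, not the paper's DCHN setup:

```python
# Toy equilibrium-propagation-style update on a single Hopfield weight.
# Energy E(s) = 0.5*s**2 - w*x*s ; cost C(s) = 0.5*(s - y)**2.
# The free phase minimizes E; the nudged phase minimizes E + beta*C
# (a positive perturbation). Contrasting the two equilibria yields the
# gradient estimate.

def equilibrium(w, x, beta=0.0, y=0.0):
    # Closed-form minimizer of E + beta*C for this quadratic toy:
    # d/ds [0.5*s**2 - w*x*s + 0.5*beta*(s - y)**2] = 0
    return (w * x + beta * y) / (1.0 + beta)

def ep_update(w, x, y, beta=0.1, lr=0.5):
    s_free = equilibrium(w, x)             # first (free) state
    s_nudged = equilibrium(w, x, beta, y)  # second (perturbed) state
    # Since dE/dw = -x*s, the contrastive estimate of the loss gradient
    # is (1/beta) * (x*s_nudged - x*s_free).
    grad = (x * s_nudged - x * s_free) / beta
    return w + lr * grad

w = 0.0
for _ in range(100):
    w = ep_update(w, x=1.0, y=0.5)
# w converges so that the free equilibrium s = w*x approaches the target y.
```

The same contrast structure holds for CL and CpL; what changes is how the second state is produced (e.g. the sign and size of beta), which is exactly the axis the paper's comparison varies.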


Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing

Ye, Nanyang, Sun, Qiao, Wang, Yifei, Yang, Liujia, Zhou, Jundong, Wang, Lei, Yang, Guang-Zhong, Wang, Xinbing, Zhou, Chenghu, Ren, Wei, Gu, Leilei, Wu, Huaqiang, Gu, Qinying

arXiv.org Artificial Intelligence

Analog computing using non-volatile memristors has emerged as a promising route to energy-efficient deep learning. New materials, such as perovskite-based memristors, are attractive for their cost-effectiveness, energy efficiency and flexibility. Yet challenges in material diversity and immature fabrication processes require extensive experimentation for device development. Moreover, significant non-idealities in these memristors often impede their use for computing. Here, we propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs that effectively address the inherent non-idealities of these memristors. Employing Bayesian optimization (BO) with a focus on usability, we efficiently identify optimal materials and fabrication conditions for perovskite memristors. Meanwhile, we develop "BayesMulti", a DNN training strategy that uses BO-guided noise injection to improve the resistance of analog DNNs to memristor imperfections. Our approach theoretically ensures that, within a certain range of parameter perturbations due to memristor non-idealities, the prediction outcomes remain consistent. Our integrated approach enables the use of analog computing in much deeper and wider networks, significantly outperforming existing methods in diverse tasks such as image classification, autonomous driving, species identification, and large vision-language models, achieving up to 100-fold improvements. We further validate our methodology on a 10$\times$10 optimized perovskite memristor crossbar, demonstrating high accuracy in a classification task and low energy consumption. This study offers a versatile solution for the efficient optimization of various analog computing systems, encompassing both devices and algorithms.
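A hedged sketch of the noise-injection idea — not the paper's "BayesMulti" implementation: perturb the weights during every forward pass so that training sees device-like variation. The tiny linear model, the data, and the `noise_scale` parameter below are invented for illustration; `noise_scale` stands in for the quantity a Bayesian-optimization loop would tune.

```python
import random

def noisy_forward(w, x, noise_scale, rng):
    # Each weight sees an independent relative perturbation, a crude
    # stand-in for memristor conductance variation.
    return sum(wi * (1.0 + rng.gauss(0.0, noise_scale)) * xi
               for wi, xi in zip(w, x))

def train(data, noise_scale=0.05, lr=0.1, epochs=300, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            err = noisy_forward(w, x, noise_scale, rng) - y
            for i in range(len(w)):
                w[i] -= lr * err * x[i]  # plain LMS step on the noisy output
    return w

# Fit y = 2*x0 - x1; the solution found is one that tolerates the injected
# weight noise, mimicking robustness to device non-idealities.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = train(data)
```

The design point is that the noise appears only in the forward pass, so gradient descent is pushed toward weights whose predictions are insensitive to the perturbation — the consistency-under-perturbation property the abstract describes.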


What's Old Is New Again

Communications of the ACM

What's old is new again. At least, it is if we are talking about analog computing. The moment you hear the phrase "analog computing," you might be forgiven for thinking we are talking about the hipsters of the technology world: the people who prefer vinyl over Spotify, the ones who want to bring back typewriters to replace word processors, or the folks who prize handwritten notes over those generated by ChatGPT.


Training Neural Networks for Execution on Approximate Hardware

Li, Tianmu, Li, Shurui, Gupta, Puneet

arXiv.org Artificial Intelligence

Approximate computing methods have shown great potential for deep learning. Owing to their reduced hardware costs, these methods are especially suitable for inference tasks on battery-operated devices that are constrained by their power budget. However, approximate computing has not reached its full potential due to the lack of work on training methods. In this work, we discuss training methods for approximate hardware, demonstrate how training needs to be specialized for it, and propose methods that speed up the training process by up to 18x.
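One common way to specialize training for approximate hardware is to put the approximate operation in the forward pass while letting gradients flow through as if it were exact (a straight-through style). The coarse quantizer below is a made-up stand-in for an approximate multiplier, not the paper's hardware model:

```python
# Train with the approximate op in the forward pass, exact gradients in
# the backward pass (straight-through). All names and values here are
# illustrative assumptions.

def quantize(v, step=0.25):
    # Crude stand-in for a low-precision hardware operand.
    return round(v / step) * step

def approx_dot(w, x):
    # Forward pass as the "hardware" would compute it.
    return sum(quantize(wi) * xi for wi, xi in zip(w, x))

def train(data, lr=0.1, epochs=200):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            err = approx_dot(w, x) - y   # forward: approximate multiply
            for i in range(len(w)):
                w[i] -= lr * err * x[i]  # backward: as if exact (straight-through)
    return w

data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0)]
w = train(data)
# The learned weights land where the *quantized* forward pass hits the
# targets, which is exactly what deployment on the approximate hardware sees.
```

Training against the exact multiply instead would leave a systematic error at inference time — the gap the abstract argues specialized training must close.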


Breaking the scaling limits of analog computing

#artificialintelligence

As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up. An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy. However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors.


Analog A.I.? It sounds crazy, but it might be the future

#artificialintelligence

The future of A.I. is … analog? At least, that's the assertion of Mythic, an A.I. chip company that, in its own words, is taking "a leap forward in performance in power" by going back in time. Before ENIAC, the world's first room-sized programmable, electronic, general-purpose digital computer, buzzed to life in 1945, arguably all computers were analog -- and had been for as long as computers have been around. Analog computers are a bit like stereo amps, using variable range as a way of representing desired values. In an analog computer, numbers are represented by way of currents or voltages, instead of the zeroes and ones that are used in a digital computer. While ENIAC represented the beginning of the end for analog computers, analog machines in fact stuck around in some form until the 1950s or 1960s, when digital transistors won out.
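The currents-and-voltages encoding maps directly onto a multiply-accumulate: store one operand as device conductances, apply the other as row voltages, and Ohm's law plus Kirchhoff's current law do the arithmetic on the wires. A small sketch with made-up numbers:

```python
# Simulate how an analog crossbar computes a matrix-vector product:
# each device passes I = G*V (Ohm's law), and each column wire sums
# the currents of its devices (Kirchhoff's current law).

def crossbar_matvec(conductances, voltages):
    # conductances: rows x cols matrix of device conductances (siemens)
    # voltages: one input voltage per row (volts)
    cols = len(conductances[0])
    currents = [0.0] * cols
    for g_row, v in zip(conductances, voltages):
        for j in range(cols):
            currents[j] += g_row[j] * v  # Ohm's law + current summation
    return currents                      # one output current per column

G = [[1e-3, 2e-3],
     [3e-3, 4e-3]]   # illustrative conductances
V = [0.5, 0.25]      # illustrative input voltages
I_out = crossbar_matvec(G, V)
print(I_out)  # approximately 1.25 mA and 2.0 mA
```

The multiply and the accumulate both happen "for free" in the physics, which is why this style of chip can be so power-efficient for the dense dot products that dominate neural-network inference.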


AI Overcomes Stumbling Block on Brain-Inspired Hardware

#artificialintelligence

Today's most successful artificial intelligence algorithms, artificial neural networks, are loosely based on the intricate webs of real neural networks in our brains. But unlike our highly efficient brains, running these algorithms on computers guzzles shocking amounts of energy: The biggest models consume nearly as much power as five cars over their lifetimes. Enter neuromorphic computing, a closer match to the design principles and physics of our brains that could become the energy-saving future of AI. Instead of shuttling data over long distances between a central processing unit and memory chips, neuromorphic designs imitate the architecture of the jelly-like mass in our heads, with computing units (neurons) placed next to memory (stored in the synapses that connect neurons). To make them even more brain-like, researchers combine neuromorphic chips with analog computing, which can process continuous signals, just like real neurons.


The case for an AI that puts nature and ethics first, not humans

#artificialintelligence

Did you know TNW Conference has a track fully dedicated to bringing the biggest names in tech to showcase inspiring talks from those driving the future of technology this year? Tim Leberecht, who authored this piece, is one of the speakers. Check out the full 'Impact' program here. On July 20, 1969, the first human landed on the moon. Fifty years later, we are in desperate need of another "moonshot" to tackle some of the pressing and overwhelmingly big issues of our time -- from the climate crisis to the decline of democracy to the upheavals to our labor markets and societies caused by the rise of exponential digital technology, especially Artificial Intelligence (AI). For the past decade, we have put our faith in technology as the ultimate problem-solver, and any kind of innovation was tied to technological advances.